Subject: Re: generating an image_map name: string syntax problem
From: Kenneth
Date: 20 Feb 2009 16:25:01
Message: <web.499f1e4d40230f63f50167bc0@news.povray.org>
"Zeger Knaepen" <zeg### [at] povplacecom> wrote:
> "Kenneth" <kdw### [at] earthlinknet> wrote in message
> news:web.499cf56640230f63f50167bc0@news.povray.org...
> > Does MegaPOV actually blur the motion *during* a single-frame render?
>
> if you ask it to :)
> in MegaPOV you have a camera_view pigment, which gives you, with some
> scripting, the possibility to have multiple cameras in your scene and
> average the result of all of them.  There are two ways of doing this.  The
> first is simply using an average pattern:
> --- START CODE ---
> .......
> #include "screen.inc"
>
> //make sure your real camera is out of the way of the scene
> Set_Camera_Location(<500,5000,500>)
> Set_Camera_Look_At(y*10000)
> Screen_Plane (Camera_motion_blur, 1, 0, 1)
> --- END CODE ---

Fascinating. It's actually very close to a paradigm I had come up with (but only
as a thought experiment) for a feature that could be added to POV-Ray
itself--*internally* rendering multiple 'snapshots' of a scene over time,
averaging them, and spitting out the final blurred frame. So MegaPOV already has
that; cool.
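Just to check my understanding: would the elided portion of your first example
look something like the following? Completely untested, and I'm going from
memory on MegaPOV's camera_view syntax, so the details (and the fifty made-up
camera positions) may well be off:

--- START CODE ---
// my guess at the texture behind Screen_Plane(Camera_motion_blur, 1, 0, 1):
#declare Camera_motion_blur =
  texture {
    average
    texture_map {
      #declare I = 0;
      #while (I < 50)
        [1 texture {
          pigment {
            camera_view {          // MegaPOV-only pigment; syntax from memory
              location <0, 2, -5> + x*(I*0.01)  // shift slightly per sample
              look_at <0, 1, 0>
              angle 50
            }
          }
          finish { ambient 1 diffuse 0 }
        }]
        #declare I = I + 1;
      #end
    }
  }
--- END CODE ---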

You had mentioned earlier that this MegaPOV averaging method produces a more
natural blur than using POV-Ray the way I had worked it out (i.e., just
averaging multiple pre-rendered 24-bit frames during a 2nd render). Is that
solely because you've given MegaPOV fifty 'camera views' to average, vs. my
smaller number of ten? Or is there something about MegaPOV's internal method
that inherently produces a more accurate blur? I'm most curious about the
difference.
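
For reference, my current second pass boils down to something like this (a
simplified sketch; the filenames and frame count are placeholders):

--- START CODE ---
// average ten pre-rendered frames in a second render
#declare N = 10;
#declare Blur_Tex =
  texture {
    average
    texture_map {
      #declare I = 0;
      #while (I < N)
        [1 texture {
          pigment {
            image_map {
              png concat("frame", str(I, -2, 0), ".png") // frame00.png, ...
              once
            }
          }
          finish { ambient 1 diffuse 0 } // self-lit; no light_source needed
        }]
        #declare I = I + 1;
      #end
    }
  }

// flat box in front of an orthographic camera; the image_map covers the
// unit square, so this lines up 1:1 only for square output (anything else
// needs an image_width/image_height factor on 'right' and on the box)
camera {
  orthographic
  location <0.5, 0.5, -1>
  look_at  <0.5, 0.5, 0>
  up y
  right x
}
box { <0, 0, 0>, <1, 1, 0.01> texture { Blur_Tex } }
--- END CODE ---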

BTW, there *is*, at present, an inherent problem with my own blurring scheme: it
currently applies the multiple averaged images onto a flat box, positioned in
front of my orthographic camera for the 2nd render. It's quite tricky to scale
the box to get an exact 1:1 correlation between the pre-rendered images' pixels
and the 'new' camera's rays (i.e., the 2nd camera's rays should exactly
intersect each pixel in the averaged composite image, to get a truly accurate
2nd render). I looked through 'screen.inc' to see what I could use there instead
of my box idea, but I couldn't discern whether it produces this *exact* 1:1
correspondence. I'm thinking that it does, but I haven't tried yet.
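
If it does give that exact mapping, the second pass would collapse to something
like this (untested; Blur_Tex is the averaged texture from my sketch above):

--- START CODE ---
#include "screen.inc"

// park the real camera far from everything, as in your example
Set_Camera_Location(<500, 5000, 500>)
Set_Camera_Look_At(y*10000)

// Screen_Plane stretches the texture's <0,0>..<1,1> uv square across
// the whole view; whether each image pixel then meets exactly one
// camera ray is the 1:1 question I haven't verified yet
Screen_Plane(Blur_Tex, 1, 0, 1)
--- END CODE ---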
>
> And the second way is by using MegaPOV's noise_pigment.  It renders faster,
> but I use this method more for testing purposes only as it doesn't give
> really accurate results:

This is a weird one to understand.  I need to read the MegaPOV documentation to
get a mental picture of what happens here. I'm wondering if it has anything to
do with the idea of 'frameless rendering'?

http://www.acm.org/crossroads/xrds3-4/ellen.html

That introduced me to a new term: 'temporal antialiasing.' The method seems to
be more applicable to real-time rendering, though (of game graphics, for
example).

Ken W.

